3 research outputs found

    Content aware multi-focus image fusion for high-magnification blood film microscopy

    Automated digital high-magnification optical microscopy is key to accelerating biology research and improving pathology clinical pathways. High-magnification objectives with large numerical apertures are usually preferred to resolve the fine structural details of biological samples, but they have a very limited depth-of-field. Depending on the thickness of the sample, analysis of specimens typically requires the acquisition of multiple images at different focal planes for each field-of-view, followed by the fusion of these planes into an extended depth-of-field image. This translates into low scanning speeds, increased storage space, and processing times unsuitable for high-throughput clinical use. We introduce a novel content-aware multi-focus image fusion approach based on deep learning that effectively extends the depth-of-field of high-magnification objectives. We demonstrate the method with three examples, showing that highly accurate, detailed, extended depth-of-field images can be obtained at a lower axial sampling rate, using 2-fold fewer focal planes than normally required.
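The paper's deep-learning fusion model is not reproduced here; as a rough illustration of the multi-focus fusion problem it addresses, a classical per-pixel sharpness baseline picks, for each pixel, the focal plane with the highest local Laplacian energy. This is a minimal sketch assuming a NumPy/SciPy environment; `fuse_focal_stack` and its window size are hypothetical choices, not the authors' method.

```python
import numpy as np
from scipy import ndimage

def fuse_focal_stack(stack):
    """Fuse a z-stack of grayscale images into one extended depth-of-field image.

    stack: array-like of shape (Z, H, W). For each pixel, select the focal
    plane with the highest local sharpness, measured as the local energy of
    the Laplacian in a small window (a common classical focus measure).
    """
    stack = np.asarray(stack, dtype=float)
    sharpness = np.empty_like(stack)
    for z, plane in enumerate(stack):
        lap = ndimage.laplace(plane)
        # Local mean of squared Laplacian = per-pixel focus measure.
        sharpness[z] = ndimage.uniform_filter(lap ** 2, size=7)
    best = np.argmax(sharpness, axis=0)        # (H, W) index of sharpest plane
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

A learned content-aware model replaces this hand-crafted focus measure, which is what allows it to tolerate coarser axial sampling than a per-pixel heuristic.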

    Adaptive Autonomous Navigation of Multiple Optoelectronic Microrobots in Dynamic Environments

    The optoelectronic microrobot is an advanced light-controlled micromanipulation technology with particular promise for collecting and transporting sensitive microscopic objects such as biological cells. However, wider application of the technology is currently limited by a reliance on manual control and a lack of methods for simultaneous manipulation of multiple microrobotic actuators. In this article, we present a computational framework for autonomous navigation of multiple optoelectronic microrobots in dynamic environments. Combining closed-loop visual servoing, SLAM, real-time visual detection of microrobots and obstacles, dynamic path-finding, and adaptive motion behaviors, this approach allows microrobots to avoid static and moving obstacles and perform a range of tasks in real-world dynamic environments. The capabilities of the system are demonstrated through micromanipulation experiments in simulation and in real conditions using a custom-built optoelectronic tweezer system.
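The abstract does not specify its path-finding algorithm; as a hedged illustration of the dynamic path-finding component, a grid-based A* planner over an occupancy grid is a common choice, with replanning on each updated grid handling moving obstacles. `astar` and the boolean-grid representation below are assumptions for the sketch, not the authors' implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2-D occupancy grid (True = obstacle).

    Returns a list of (row, col) cells from start to goal inclusive,
    or None if the goal is unreachable. Re-running this on a freshly
    updated grid is the simplest way to react to moving obstacles.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), start)]
    came_from = {start: None}
    g = {start: 0}                      # best known cost-to-reach per cell
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:      # walk parents back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g[cur] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    came_from[nb] = cur
                    heapq.heappush(open_heap, (ng + h(nb), nb))
    return None
```

In a visual-servoing loop, each planned waypoint would be converted into a light-pattern command that drives the microrobot toward it, with the grid rebuilt from the real-time obstacle detections.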

    Detection of acute promyelocytic leukemia in peripheral blood and bone marrow with annotation-free deep learning

    While optical microscopy inspection of blood films and bone marrow aspirates by a hematologist is a crucial step in establishing the diagnosis of acute leukemia, especially in low-resource settings where other diagnostic modalities are not available, the task remains time-consuming and prone to human inconsistency. This has an impact especially in cases of Acute Promyelocytic Leukemia (APL) that require urgent treatment. Integration of automated computational hematopathology into clinical workflows can improve the throughput of these services and reduce cognitive human error. However, a major bottleneck in deploying such systems is a lack of sufficient cell-level morphological annotations to train deep learning models. We overcome this by leveraging patient diagnostic labels to train weakly supervised models that detect different types of acute leukemia. We introduce a deep learning approach, Multiple Instance Learning for Leukocyte Identification (MILLIE), able to perform automated, reliable analysis of blood films with minimal supervision. Without being trained to classify individual cells, MILLIE differentiates between acute lymphoblastic and myeloblastic leukemia in blood films. More importantly, MILLIE detects APL in blood films (AUC 0.94 ± 0.04) and in bone marrow aspirates (AUC 0.99 ± 0.01). MILLIE is a viable solution to augment the throughput of clinical pathways that require assessment of blood film microscopy.
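The weak-supervision idea behind MILLIE can be sketched in miniature: each patient is a bag of cell instances, only the bag (diagnostic) label is known, and with max pooling a bag is positive exactly when its highest-scoring instance is, so the gradient flows through that key instance alone. The toy NumPy example below illustrates this under simplified assumptions (a linear instance scorer, 2-D features); `train_mil` and `predict_bag` are hypothetical names, not the published model.

```python
import numpy as np

def train_mil(bags, labels, epochs=200, lr=0.5):
    """Weakly supervised multiple instance learning with max pooling.

    bags   : list of (n_i, d) arrays of instance features
    labels : bag labels (1 if the bag contains at least one positive instance)
    Bag probability = sigmoid of the max instance score, so each update
    adjusts the scorer only through the bag's highest-scoring instance.
    """
    rng = np.random.default_rng(0)
    d = bags[0].shape[1]
    w, b = rng.normal(size=d) * 0.01, 0.0
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for x, y in zip(bags, labels):
            scores = x @ w + b
            i = int(np.argmax(scores))      # key instance under max pooling
            grad = sigmoid(scores[i]) - y   # d(log-loss)/d(key score)
            w -= lr * grad * x[i]
            b -= lr * grad
    return w, b

def predict_bag(w, b, x):
    """Bag-level probability: sigmoid of the maximum instance score."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b).max()))
```

Once trained this way, the same instance scorer can rank individual cells, which is how a bag-trained model ends up highlighting abnormal leukocytes it was never explicitly taught to label.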